- Title
- Incremental training of first order recurrent neural networks to predict a context-sensitive language
- Creator
- Chalup, Stephan K.; Blair, Alan D.
- Relation
- Neural Networks Vol. 16, Issue 7, p. 955-972
- Publisher Link
- http://dx.doi.org/10.1016/S0893-6080(03)00054-6
- Publisher
- Pergamon / Elsevier Ltd.
- Resource Type
- journal article
- Date
- 2003
- Description
In recent years it has been shown that first order recurrent neural networks trained by gradient descent can learn not only regular but also simple context-free and context-sensitive languages. However, the success rate was generally low and severe instability issues were encountered. The present study examines the hypothesis that a combination of evolutionary hill climbing with incremental learning and a well-balanced training set enables first order recurrent networks to reliably learn context-free and mildly context-sensitive languages. In particular, we trained the networks to predict symbols in string sequences of the context-sensitive language {aⁿbⁿcⁿ; n≥1}. Comparative experiments with and without incremental learning indicated that incremental learning can accelerate and facilitate training. Furthermore, incrementally trained networks generally produced monotonic trajectories in hidden unit activation space, while the trajectories of non-incrementally trained networks were oscillatory. The non-incrementally trained networks were more likely to generalise.
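To make the prediction task concrete, the following is a minimal sketch (not the authors' code) of the next-symbol prediction setup for {aⁿbⁿcⁿ; n≥1}: strings of the language are generated, and (prefix, next symbol) pairs are extracted; once the first `b` appears, every remaining symbol is determined by n, so a learner can be scored on those deterministic positions. The function names are illustrative assumptions.

```python
def make_string(n):
    """Return the string a^n b^n c^n for a given n >= 1."""
    return "a" * n + "b" * n + "c" * n

def prediction_pairs(n):
    """Return (prefix, next_symbol) pairs for a^n b^n c^n.

    Early positions (within the run of a's) are ambiguous, since the
    next symbol could be 'a' or 'b'; from the first 'b' onward the
    continuation is fully determined by n.
    """
    s = make_string(n)
    return [(s[:i], s[i]) for i in range(1, len(s))]

# Incremental training, as described in the abstract, would present
# short strings (small n) first and gradually increase n.
```

For example, `prediction_pairs(2)` yields pairs such as `("aa", "b")` and `("aabb", "c")`, the deterministic positions on which prediction accuracy can be measured.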
- Subject
- mildly context-sensitive language; incremental learning; simple recurrent neural network; evolutionary algorithm; hidden unit dynamics
- Identifier
- http://hdl.handle.net/1959.13/27498
- Identifier
- uon:1732
- Identifier
- ISSN:0893-6080
- Language
- eng
- Full Text
- Reviewed
File | Description | Size | Format
---|---|---|---
SOURCE1 | Author final version | 397 KB | Adobe Acrobat PDF